10 research outputs found

    Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems

    Full text link
    [EN] Embedded systems are steadily extending their application areas, dealing with increasing requirements in performance, power consumption, and area (PPA). Whenever embedded systems are used in safety-critical applications, they must also meet rigorous dependability requirements to guarantee their correct operation during an extended period of time. Meeting these requirements is especially challenging for those systems that are based on Field Programmable Gate Arrays (FPGAs), since they are very susceptible to Single Event Upsets. This leads to increased dependability threats, especially in harsh environments. Dependability should therefore be considered as one of the primary criteria for decision making throughout the whole design flow, which should be complemented by several dependability-driven processes. First, dependability assessment quantifies the robustness of hardware designs against faults and identifies their weak points. Second, dependability-driven verification ensures the correctness and efficiency of fault mitigation mechanisms. Third, dependability benchmarking allows designers to select (from a dependability perspective) the most suitable IP cores, implementation technologies, and electronic design automation (EDA) tools. Finally, dependability-aware design space exploration (DSE) allows the selected IP cores and EDA tools to be optimally configured, improving as much as possible the dependability and PPA features of the resulting implementations. The aforementioned processes rely on fault injection testing to quantify the robustness of the designed systems.
Despite the wide variety of fault injection solutions available nowadays, several important problems still have to be addressed to better cover the needs of a dependability-driven design flow. In particular, simulation-based fault injection (SBFI) should be adapted to implementation-level HDL models to take into account the architecture of diverse logic primitives, while keeping the injection procedures generic and low-intrusive. Likewise, the granularity of FPGA-based fault injection (FFI) should be refined to enable the accurate identification of weak points in FPGA-based designs. Another important challenge that dependability-driven processes face in practice is the reduction of the SBFI and FFI experimental effort. The high complexity of modern designs raises the experimental effort beyond the available time budgets, even in simple dependability assessment scenarios, and it becomes prohibitive in the presence of alternative design configurations. Finally, dependability-driven processes lack instrumental support covering the semicustom design flow in all its variety of description languages, implementation technologies, and EDA tools. Existing fault injection tools only partially cover the individual stages of the design flow, being usually specific to a particular design representation level and implementation technology. This work addresses the aforementioned challenges by efficiently integrating dependability-driven processes into the design flow. First, it proposes new SBFI and FFI approaches that enable an accurate and detailed dependability assessment at different levels of the design flow. Second, it improves the performance of dependability-driven processes by defining new techniques for accelerating SBFI and FFI experiments. Third, it defines two DSE strategies that enable the optimal dependability-aware tuning of IP cores and EDA tools, while reducing as much as possible the robustness evaluation effort.
Fourth, it proposes a new toolkit (DAVOS) that automates and seamlessly integrates the aforementioned dependability-driven processes into the semicustom design flow. Finally, it illustrates the usefulness and efficiency of these proposals through a case study consisting of three soft-core embedded processors implemented on a Xilinx 7-series SoC FPGA. Tuzov, I. (2020). Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/159883
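The dependability assessment process that all of the above relies on can be pictured as a simple campaign loop: run a fault-free golden simulation, inject one fault per experiment, and compare the resulting observation trace against the golden one. The sketch below is a toy illustration of that loop only; the model, fault list, and verdict names are invented and do not reflect DAVOS's actual API:

```python
import random

def run_model(state_bits, faulty_bit=None):
    # Toy stand-in for an HDL simulation run: returns the observation trace.
    # A real SBFI campaign would drive an HDL simulator here instead.
    bits = list(state_bits)
    if faulty_bit is not None:
        bits[faulty_bit] ^= 1          # single bit-flip fault model
    return [bits[0] & bits[1]]         # toy output: AND of two state bits

def sbfi_campaign(state_bits, n_targets):
    golden = run_model(state_bits)     # fault-free reference trace
    verdicts = {"masked": 0, "failure": 0}
    for target in random.sample(range(len(state_bits)), n_targets):
        trace = run_model(state_bits, faulty_bit=target)
        verdicts["failure" if trace != golden else "masked"] += 1
    return verdicts

v = sbfi_campaign([1, 0, 1, 1], n_targets=4)   # {'masked': 3, 'failure': 1}
```

Even this toy shows why weak-point identification matters: only one of the four state bits propagates its flip to the output, so only that bit warrants a mitigation mechanism.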

    Simulating the effects of logic faults in implementation-level VITAL-compliant models

    Full text link
    [EN] Simulation-based fault injection is a well-known technique to assess the dependability of hardware designs specified using hardware description languages (HDL). Although logic faults are usually introduced in models defined at the register transfer level (RTL), the most accurate results can be obtained by considering implementation-level ones, which reflect the actual structure and timing of the circuit. These models consist of a list of interconnected technology-specific components (macrocells), provided by vendors and annotated with post-place-and-route delays. Macrocells described in the very high speed integrated circuit HDL (VHDL) should also comply with the VHDL initiative towards application specific integrated circuit libraries (VITAL) standard to be interoperable across standard simulators. However, the rigid architecture imposed by VITAL means that fault injection procedures applied at RTL cannot be used straightforwardly. This work identifies a set of generic operations on VITAL-compliant macrocells that are later used to define how to accurately simulate the effects of common logic fault models. The generality of this proposal is supported by the definition of a platform-specific fault injection procedure based on these operations. Three embedded processors, implemented using the Xilinx toolchain and SIMPRIM library of macrocells, are considered as a case study, which exposes the gap existing between the robustness assessment at RTL and at implementation level. This work has been partially funded by the Ministerio de Economía, Industria y Competitividad of Spain under grant agreement no. TIN2016-81075-R, and the "Programa de Ayudas de Investigación y Desarrollo" (PAID) of Universitat Politècnica de València. Tuzov, I.; De-Andrés-Martínez, D.; Ruiz, JC. (2019). Simulating the effects of logic faults in implementation-level VITAL-compliant models. Computing. 101(2):77-96.
https://doi.org/10.1007/s00607-018-0651-4
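In practice, injecting faults into macrocell ports boils down to scripted simulator commands generated for each experiment. A minimal sketch of such a generator follows; the signal paths and times are made up, and the `force` syntax follows common Tcl simulator conventions (ModelSim/Questa-style), not necessarily the exact grammar of any particular tool:

```python
def stuck_at(path, value):
    # Permanent stuck-at fault: freeze the signal for the whole run.
    return "force -freeze {} {}".format(path, value)

def bit_flip(path, value, t_inject, duration):
    # Transient bit-flip: force the inverted value at t_inject, release later.
    return "force -freeze {} {} {}ns -cancel {}ns".format(
        path, 1 - value, t_inject, t_inject + duration)

cmds = [stuck_at("/tb/dut/lut5_q", 0),
        bit_flip("/tb/dut/ff3_q", 1, t_inject=100, duration=20)]
```

The point made by the paper is that, under VITAL, the paths such commands must target are internal to rigidly structured macrocells, which is why generic per-macrocell operations are needed before a script like this can be emitted automatically.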

    Reversing FPGA Architectures for Speeding up Fault Injection: does it pay?

    Full text link
    [EN] Although initially considered for fast system prototyping, Field Programmable Gate Arrays (FPGAs) are gaining interest for implementing final products thanks to their inherent reconfiguration capabilities. As they are susceptible to soft errors in their configuration memory, the dependability of FPGA-based designs must be accurately evaluated before they can be used in critical systems. In recent years, research has focused on speeding up fault injection in FPGA-based systems by parallelising experimentation, reducing the injection time, and decreasing the number of experiments. Going a step further requires delving into the FPGA architecture, i.e. precisely determining which components are implementing the considered design (mapping) and which are exercised by the considered workload (profiling). After that, fault injection campaigns can focus on those components actually used, to identify critical ones, i.e. those leading the target system to fail. Some manufacturers, like Xilinx, identify those bits in the FPGA configuration memory that may change the implemented design when affected by a soft error. However, their correspondence to particular components of the FPGA fabric and their relationship with the implementation-level model are yet unknown. This paper addresses whether the effort of reversing an FPGA architecture to filter out redundant and unused essential bits pays off in terms of experimental time. Since the work of reversing the complete architecture of an FPGA is titanic, as a first step towards this ambitious goal, this paper focuses on those elements in charge of implementing the combinational logic of the design (Look-Up Tables). The experimental results that support this study derive from implementing three soft-core processors on a Zynq SoC FPGA and show the interest of the proposal. Grant PID2020-120271RB-I00 funded by MCIN/AEI/10.13039/501100011033. Tuzov, I.; De-Andrés-Martínez, D.; Ruiz, JC. (2022). Reversing FPGA Architectures for Speeding up Fault Injection: does it pay?. IEEE. 81-88. https://doi.org/10.1109/EDCC57035.2022.00023
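The filtering idea behind the paper can be illustrated with plain set operations: keep only the configuration-memory bits that are both essential (per the vendor) and belong to LUTs that the design actually maps to. The bit addresses below are invented for the example; real essential-bit data would come from vendor-generated files, and the used-LUT set from the reversed architecture:

```python
# Essential bits reported by the vendor, as (frame, word, bit) - hypothetical
essential_bits = {(0x0040, 2, 5), (0x0040, 2, 6), (0x0081, 0, 13), (0x0102, 7, 1)}

# Configuration bits of LUTs the design actually uses (from reversing) - hypothetical
used_lut_bits = {(0x0040, 2, 5), (0x0102, 7, 1), (0x0200, 3, 3)}

# Inject only into essential bits belonging to LUTs exercised by the design
targets = essential_bits & used_lut_bits

# Ratio of the original campaign size to the filtered one
speedup = len(essential_bits) / len(targets)
```

Halving the target list halves the FFI campaign in this toy case; whether the up-front reversing effort amortises over real campaigns is exactly the question the paper examines.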

    Improving the Robustness of Redundant Execution with Register File Randomization

    Full text link
    [EN] Staggered Redundant Execution (SRE) is a fault-tolerance mechanism that has been widely deployed in the context of safety-critical applications. SRE not only protects the system in the presence of faults but also helps relax the safety requirements of individual elements. However, in this paper we show that SRE does not effectively protect the system against a wide range of faults, and thus new mechanisms to increase the diversity of homogeneous cores are needed. We propose Register File Randomization (RFR), a low-cost diversity mechanism that significantly increases the robustness of homogeneous multicores against common-cause faults (CCFs) and register file wearout. Our results show that RFR completely removes the failure rate for register file CCFs for certain workloads and reduces by a factor of 5X the impact of stress-related register file aging for the workloads analysed. Our implementation requires less than 50 RTL lines of code, and the area (FPGA logic) overhead of RFR is less than 0.2% of a 64-bit RISC-V core FPGA implementation. This work has received funding from the ECSEL Joint Undertaking (JU) under grant agreement no. 877056, the Agencia Estatal de Investigación from Spain under grant agreement no. PCI2020-112092, and the European Union's Horizon 2020 research and innovation programme under grant agreement no. 871467. Tuzov, I.; Andreu, P.; Medina, L.; Picornell-Sanjuan, T.; Robles Martínez, A.; López Rodríguez, PJ.; Flich Cardo, J.... (2021). Improving the Robustness of Redundant Execution with Register File Randomization. IEEE. 1-9. https://doi.org/10.1109/ICCAD51958.2021.9643466
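The mechanism can be sketched as a per-core random bijection from architectural to physical register indices, so that each redundant core stores a given architectural register in a different physical location. The sketch below is a behavioural illustration only (the paper's mechanism is implemented in RTL, and the fixed-x0 convention here is an assumption for the example):

```python
import random

def rfr_mapping(n_regs, seed):
    # Random bijection from architectural to physical register indices.
    # x0 (the RISC-V zero register) is left fixed in this sketch.
    rng = random.Random(seed)
    phys = list(range(1, n_regs))
    rng.shuffle(phys)
    return [0] + phys            # index = architectural reg, value = physical reg

head = rfr_mapping(32, seed=1)   # mapping used by the head core
tail = rfr_mapping(32, seed=2)   # different mapping for the staggered tail core
```

Because the two cores use different mappings, a defect or common-cause fault in one physical register corrupts different architectural registers on each core, so lockstep comparison of the cores' outputs can expose faults that identical mappings would mask.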

    A Survey of Recent Developments in Testability, Safety and Security of RISC-V Processors

    Get PDF
    With the continued success of the open RISC-V architecture, practical deployment of RISC-V processors necessitates an in-depth consideration of their testability, safety and security aspects. This survey provides an overview of recent developments in this quickly-evolving field. We start with discussing the application of state-of-the-art functional and system-level test solutions to RISC-V processors. Then, we discuss the use of RISC-V processors for safety-related applications; to this end, we outline the essential techniques necessary to obtain safety both in the functional and in the timing domain and review recent processor designs with safety features. Finally, we survey the different aspects of security with respect to RISC-V implementations and discuss the relationship between cryptographic protocols and primitives on the one hand and the RISC-V processor architecture and hardware implementation on the other. We also comment on the role of a RISC-V processor for system security and its resilience against side-channel attacks.

    Tuning synthesis flags to optimize implementation goals: Performance and robustness of the LEON3 processor as a case study

    Full text link
    [EN] The steady growth in complexity of FPGAs has led designers to rely more and more on manufacturers' and third parties' design tools to meet their implementation goals. However, as modern synthesis tools provide a myriad of different optimization flags, whose contribution towards each implementation goal is not clearly accounted for, designers just make use of a handful of those flags. This paper addresses the challenging problem of determining the best configuration of available synthesis flags to optimize the designer's implementation goals. First, fractional factorial design is used to reduce the whole design space. Resulting configurations are implemented to estimate the actual impact, and statistical significance, of each considered synthesis flag. After that, multiple regression analysis techniques predict the expected outcome for each possible combination of these flags. Finally, multiple-criteria decision making techniques enable the selection of the best set of synthesis flags according to explicitly defined implementation goals. This work has been partially funded by the Ministerio de Economía, Industria y Competitividad de España under grant agreement no. TIN2016-81075-R, and the "Programa de Ayudas de Investigación y Desarrollo" (PAID) de la Universitat Politècnica de València. Tuzov, I.; Andrés, DD.; Ruiz, JC. (2018). Tuning synthesis flags to optimize implementation goals: Performance and robustness of the LEON3 processor as a case study. Journal of Parallel and Distributed Computing. 112:84-96. https://doi.org/10.1016/j.jpdc.2017.10.002
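The screening step can be illustrated with a toy two-level design: each flag is coded -1/+1, and a flag's main effect is the difference between the mean response at its high setting and at its low setting. The three "flags" and the response values below are invented for the example (the paper uses a fractional design over many real synthesis flags, not this full 2^3 one):

```python
# Rows: coded flag settings (-1/+1) for a toy 2^3 full-factorial design
runs = [(-1, -1, -1), (1, -1, -1), (-1, 1, -1), (1, 1, -1),
        (-1, -1, 1), (1, -1, 1), (-1, 1, 1), (1, 1, 1)]
# Hypothetical measured response (e.g. failure rate) for each run
y = [10.0, 8.0, 12.0, 10.0, 6.0, 4.0, 8.0, 6.0]

def main_effect(flag):
    # Mean response with the flag set high minus mean response with it low
    hi = [r for run, r in zip(runs, y) if run[flag] == 1]
    lo = [r for run, r in zip(runs, y) if run[flag] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = [main_effect(f) for f in range(3)]   # one estimate per flag
```

Here the third flag has the largest (negative, i.e. beneficial) effect on the toy response, so a screening step like this tells the designer which flags deserve the follow-up regression and multiple-criteria analysis.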

    Robustness assessment via simulation-based fault injection of the implementation level models of the LEON3, MC8051, and PIC microcontrollers in presence of stuck-at, bit-flip, pulse, and delay fault models

    No full text
    This dataset package contains the results of fault injection experiments for RTL and implementation-level models of 3 microprocessors (LEON3, MC8051, PIC).

1. Package contents
- raw traces of fault injection experiments: *.lst files
- analysis results grouped by fault model: *.html files in the /REPORT subfolders
- summary for each targeted HDL model and fault model: index.html in each design folder

To facilitate navigation through the contents, the package is organized as a tree of *.html pages. The root page is 'index.html' in the archive root. Additionally, to present the raw traces of each experiment in a convenient form, the *.lst files are processed on the fly by a custom Python script, returning an interactive *.html page. Each observation trace is a table, where:
- each row represents an observation vector, comprising {simulation time stamp}, {flags}, {internal state}, {outputs};
- each cell is highlighted a) green if it matches the reference trace (fault-free simulation), b) red if it mismatches the reference trace, denoting an error for internals / a failure for outputs, c) violet in the case of a vector whose timestamp was not in the reference (unexpected transition).
These highlighting options can be customized by modifying the linked *.css files.

2. Installation
2.1 Ensure a web server is installed (Apache preferable). For instance, XAMPP: https://www.apachefriends.org/index.html
2.2 Ensure Python 2.x is installed. Type in a terminal (cmd console in Windows): “python --version”. If the output looks like > Python 2.x.x, Python is installed. Otherwise install the relevant 2.x.x distribution: https://www.python.org/ and add the Python installation path to the PATH environment variable.
2.3 Ensure that the web server is configured to execute CGI scripts, particularly Python scripts. In the 'httpd.conf' file (XAMPP control panel: Config button next to the Apache module):
– search for the line Options Indexes FollowSymLinks and add ExecCGI, so that the resulting line looks like this: Options Indexes FollowSymLinks ExecCGI
– search for #AddHandler cgi-script .cgi, uncomment it (remove #), and append “.py”, so that the resulting line is: AddHandler cgi-script .cgi .pl .asp .py
2.4 Unpack the *.zip package into a folder on the web server, for instance 'Web-server root folder'/ExperimentalResults. The web server root can be configured in the 'httpd.conf' file in the DocumentRoot section, for instance: DocumentRoot "F:/HTWEB" <Directory "F:/HTWEB"> ...
2.5 In the web browser, navigate to the root directory of the extracted package: http://localhost/ExperimentalResults/index.html

3. How to read the contents
The root page contains links to the different analysis reports, for each HDL design under study and each considered fault model. The pages on the first tree level represent the summary of each injection campaign, describing the rate of failure modes, number of experiments, latencies, and supplementary info. The pages on the second tree level are the detailed analysis reports for each experiment, detailing the fault target, the parameters of the injected fault, the detected failure mode, the number of errors, etc. The cells of the first column are highlighted a) in green if the injection did not cause a failure, b) in red otherwise. The links in this first column navigate to the detailed traces of each experiment. The latter requires the web server to be configured to execute the Python scripts (see section 2, Installation); otherwise the raw traces (*.lst files in ./results folders) can be viewed with any text editor (notepad++, etc.)
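The per-vector highlighting described above amounts to a timestamp-keyed comparison of each observation trace against the golden trace. A minimal sketch with invented vectors (the dataset's actual scripts parse *.lst files; the dict-based trace format here is an assumption for the example):

```python
def classify(reference, observed):
    # reference/observed: dict {timestamp: state-vector tuple}.
    # Returns {timestamp: 'match' | 'mismatch' | 'unexpected'} -
    # the green/red/violet categories used in the report pages.
    verdict = {}
    for t, vec in observed.items():
        if t not in reference:
            verdict[t] = "unexpected"      # violet: timestamp not in reference
        elif vec == reference[t]:
            verdict[t] = "match"           # green: matches fault-free run
        else:
            verdict[t] = "mismatch"        # red: error/failure
    return verdict

ref = {0: (0, 0), 10: (1, 0), 20: (1, 1)}
obs = {0: (0, 0), 10: (1, 1), 15: (1, 0)}
v = classify(ref, obs)   # {0: 'match', 10: 'mismatch', 15: 'unexpected'}
```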

    Example results for use-cases of DAVOS toolkit (dependability assessment, verification, optimisation and selection of hardware models)

    No full text
    This dataset exemplifies the results that can be obtained for several basic experimentation scenarios by means of the DAVOS toolkit, available under the MIT licence at https://github.com/IlyaTuzov/DAVOS

The particular experimentation scenarios are:
- Dependability assessment (LEON3 processor core)
- Dependability benchmarking of implementation alternatives (MC8051 processor core)
- Dependability-aware design space exploration (when implementing the PIC core with the Xilinx ISE toolkit)

Installation steps
1. Ensure Python 2.x is installed. Type in a terminal (cmd console in Windows): “python --version”. If the output looks like > Python 2.x.x, Python is installed. Otherwise download and install a 2.x.x distribution: https://www.python.org/ and add the Python installation path to the PATH environment variable.
2. Ensure a web server is installed (Apache preferable). For instance, XAMPP: https://www.apachefriends.org/index.html
3. Ensure that the web server is configured to execute CGI scripts, particularly Python scripts. In the 'httpd.conf' file (XAMPP control panel: Config button next to the Apache module):
– search for the line Options Indexes FollowSymLinks and add ExecCGI, so that the resulting line looks like this: Options Indexes FollowSymLinks ExecCGI
– search for #AddHandler cgi-script .cgi, uncomment it (remove #), and append “.py”, so that the resulting line is: AddHandler cgi-script .cgi .pl .asp .py
4. Unpack the contents of the *.zip package into a folder on the web server, for instance 'Web-server root folder'/Dataset. The web server root can be configured in the 'httpd.conf' file in the DocumentRoot section, for instance: DocumentRoot "F:/HTWEB" ...
5. In the web browser, navigate to the root directory of the extracted package: http://localhost/Dataset/index.html
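Applied together, the httpd.conf edits from the installation steps above would leave a fragment like the following (the directory paths are just the examples used in the text, not required values):

```apache
DocumentRoot "F:/HTWEB"
<Directory "F:/HTWEB">
    Options Indexes FollowSymLinks ExecCGI
    AddHandler cgi-script .cgi .pl .asp .py
</Directory>
```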

    Formation of future teachers' meta-competence as a base to develop their potential readiness for continuing education

    The purpose of the study is to identify the features of the relationship between future teachers' potential readiness for lifelong education and their level of meta-competence development, and on this basis to develop recommendations for creating conditions at the university for developing students' meta-cognitive abilities. Research methods and materials. The study involved 748 students enrolled in teacher education programs. The following were studied: the level of future teachers' psychological, strategic and competence-based readiness for lifelong education (questionnaire survey); and the level of development of the worldview ("Methodology of worldview activity" by D.A. Leontyev and A.N. Ilchenko), intellectual ("Mental performance and the type of intelligence" by B.N. Ryzhov), cognitive ("Methodology for assessing the systemic nature of thinking" by I.A. Sychev) and operational-procedural (methods for identifying the level of formation of mental operations: the ability to analyze, compare, generalize and classify) components of meta-competence. Comparative (chi-square test, Student's t-test) and correlation (Spearman's correlation coefficient) analyses were performed. Results. A close relationship was revealed between the indicators of future teachers' potential readiness for lifelong pedagogical education and the individual components of meta-competence: the characteristics of students' worldview, their intelligence and their cognitive abilities. Conclusion. The results of the study suggest that forming future teachers' readiness for lifelong education should rest not only, and not so much, on stimulating their motivation, but also on creating conditions for developing their worldview, intelligence, systemic and creative thinking, and general cognitive abilities.